Adversarial attack vulnerability of medical image analysis systems: Unexplored factors
Authors
Abstract
Adversarial attacks are considered a potentially serious security threat for machine learning systems. Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks due to strong financial incentives and the associated technological infrastructure. In this paper, we study previously unexplored factors affecting the adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology. We focus on black-box settings, in which the attacker does not have full access to the target model and usually uses another model, commonly referred to as the surrogate model, to craft adversarial examples; we consider this the most realistic scenario for MedIA systems. Firstly, we study the effect of weight initialization (ImageNet vs. random) on the transferability of adversarial attacks from the surrogate model to the target model. Secondly, we study the influence of differences in development data between target and surrogate models. We further study the interaction of these factors with model architecture. All experiments were done with a perturbation degree tuned to ensure maximal transferability at minimal visual perceptibility of the attacks. Our results show that pre-training may dramatically increase the transferability of adversarial examples, even when the surrogate's architecture is different: the larger the performance gain from pre-training, the larger the transferability. Differences in development data between target and surrogate models considerably decrease the performance of the attack; this effect is amplified by differences in model architecture. We believe these factors should be considered when developing security-critical MedIA systems planned to be deployed in clinical practice.
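To make the surrogate-based transfer setting concrete, the following is a minimal sketch of how an attacker might craft an adversarial example on a surrogate model using the fast gradient sign method (FGSM), with the perturbation budget `epsilon` playing the role of the abstract's "perturbation degree". The tiny logistic-regression surrogate, its weights, and the helper names (`fgsm_perturb`, `loss_grad_wrt_input`) are hypothetical illustrations, not the models or attack configuration used in the paper.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: step of size epsilon in the direction of the sign of the
    loss gradient w.r.t. the input, then clip back to valid pixel range."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy surrogate: a logistic-regression "classifier" with made-up weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def loss_grad_wrt_input(x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for the logistic surrogate p = sigmoid(w @ x + b)."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

x = np.array([0.2, 0.7, 0.5])  # clean "image" (3 pixels for illustration)
y = 1.0                        # true label
x_adv = fgsm_perturb(x, loss_grad_wrt_input(x, y), epsilon=0.05)
# x_adv differs from x by at most epsilon per pixel, yet lowers the
# surrogate's confidence in the true class.
```

In the black-box scenario studied here, `x_adv` would then be submitted to the (inaccessible) target model; the paper's question is how surrogate pre-training, data differences, and architecture differences affect whether such examples still fool the target.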
Related articles
Adversarial vulnerability for any classifier
Despite achieving impressive and often superhuman performance on multiple benchmarks, state-of-the-art deep networks remain highly vulnerable to perturbations: adding small, imperceptible, adversarial perturbations can lead to very high error rates. Provided the data distribution is defined using a generative model mapping latent vectors to datapoints in the distribution, we prove that no class...
Analysis of Nonautonomous Adversarial Systems
Generative adversarial networks are used to generate images, but their convergence properties are still not well understood. There have been a few studies that intended to investigate the stability properties of GANs as a dynamical system. This short writing can be seen in that direction. Among the proposed methods for stabilizing training of GANs, β-GAN was the first that proposed a complete anne...
Medical Image Synthesis with Context-Aware Generative Adversarial Networks
Computed tomography (CT) is critical for various clinical applications, e.g., radiotherapy treatment planning and also PET attenuation correction. However, CT exposes radiation during acquisition, which may cause side effects to patients. Compared to CT, magnetic resonance imaging (MRI) is much safer and does not involve any radiations. Therefore, recently, researchers are greatly motivated to ...
Generative Adversarial Network based Synthesis for Supervised Medical Image Segmentation*
Modern deep learning methods achieve state-of-the-art results in many computer vision tasks. While these methods perform well when trained on large datasets, deep learning methods suffer from overfitting and lack of generalization given smaller datasets. Especially in medical image analysis, acquisition of both imaging data and corresponding ground-truth annotations (e.g. pixel-wise segmentation...
Attack vulnerability of complex networks.
We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of att...
Journal
Journal title: Medical Image Analysis
Year: 2021
ISSN: 1361-8423, 1361-8431, 1361-8415
DOI: https://doi.org/10.1016/j.media.2021.102141